
    Learning with distance

    The two main competing paradigms in Artificial Intelligence are the numeric (vector-space) and the symbolic approaches. The debate on which approach is best for modelling intelligence has been called the ‘central debate in AI’. ETS is an inductive learning model that unifies these two competing approaches to learning: it uses a distance function to define a class, and also uses distance to direct the learning process. An ETS algorithm is applied to the Monk’s Problems, a set of problems designed to evaluate the performance of modern learning algorithms, whether numeric or symbolic.
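The idea of defining a class through a distance function can be sketched as follows. This is a generic illustration of distance-driven classification, not the ETS algorithm itself; the edit-distance metric and the exemplar sets are assumptions made for the example.

```python
# Generic sketch of distance-driven classification (not ETS itself): a class
# is defined by exemplars plus a distance function, and an item joins the
# class whose exemplars are nearest on average.

def edit_distance(a, b):
    """Levenshtein distance between two symbol strings."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def classify(x, classes):
    """Assign x to the class whose exemplars lie nearest on average."""
    return min(classes,
               key=lambda c: sum(edit_distance(x, e) for e in classes[c]) / len(classes[c]))
```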

    Performing fusion of news reports through the construction of conceptual graphs

    As events occur around the world, different reports about them are posted on various web portals. Each news agency writes its own report based on information obtained by its reporters on site or through its contacts – thus each report may have its own ‘unique’ information. A person interested in a particular event may read various reports about that event from different sources to get all the available information. In our research, we are attempting to fuse all the different pieces of information found in the different reports about the same event into one report – thus providing the user with one document where he/she can find all the information related to the event in question. We attempt to do this by constructing conceptual graph representations of the different news reports, and then merging those graphs together. To evaluate our system, we are building an operational system which will display, on a web portal, fused reports on events which are currently in the news. Web users can then grade the system on its effectiveness.
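The fusion step can be caricatured as a graph merge. The representation below, each report as a set of (concept, relation, concept) triples, is an assumed simplification (real conceptual graphs are richer), and the sample reports are invented for illustration.

```python
# Toy fusion of two reports on the same event, each represented (an assumed
# simplification) as a set of (concept, relation, concept) triples; merging
# preserves the information unique to either report.

def fuse(*reports):
    """Union the conceptual-graph triples of several reports."""
    merged = set()
    for triples in reports:
        merged |= triples
    return merged

report_a = {("earthquake", "location", "Japan"),
            ("earthquake", "magnitude", "6.8")}
report_b = {("earthquake", "location", "Japan"),
            ("earthquake", "casualties", "none reported")}
fused = fuse(report_a, report_b)  # shared triple appears once, unique ones survive
```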

    Automatic classification of web pages into bookmark categories

    We describe a technique to automatically classify a web page into an existing bookmark category whenever a user decides to bookmark a page. HyperBK compares a bag-of-words representation of the page to descriptions of categories in the user’s bookmark file. Unlike default web browser dialogs in which the user may be presented with the category into which he or she saved the last bookmarked file, HyperBK also offers the category most similar to the page being bookmarked. The user can opt to save the page to the last category used; create a new category; or save the page elsewhere. In an evaluation, the user’s preferred category was offered on average 67% of the time.
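A minimal sketch of the comparison the abstract describes, using bag-of-words cosine similarity. The raw-count weighting and the category descriptions are assumptions; the abstract does not specify HyperBK's actual scheme.

```python
from collections import Counter
from math import sqrt

# Assumed scoring scheme: raw term counts and cosine similarity; the
# abstract does not specify HyperBK's actual weighting.

def bag(text):
    """Bag-of-words representation: lowercase term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_category(page_text, categories):
    """Offer the bookmark category whose description best matches the page."""
    page = bag(page_text)
    return max(categories, key=lambda c: cosine(page, bag(categories[c])))
```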

    System for spatio-temporal analysis of online news and blogs

    Previous work on spatio-temporal analysis of news items and other documents has largely focused on broad categorization of small text collections by region or country. A system for large-scale spatio-temporal analysis of online news media and blogs is presented, together with an analysis of global news media coverage over a nine-year period. We demonstrate the benefits of using a hierarchical geospatial database to disambiguate geographical named entities, and provide results for an extremely fine-grained analysis of news items. Aggregate maps of media attention for particular places around the world are compared with geographical and socio-economic data. Our analysis suggests that GDP per capita is the best indicator of media attention.
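The hierarchical disambiguation idea can be sketched as follows, with a hypothetical two-entry gazetteer: a candidate sense of a place name scores higher when its ancestors in the geographic hierarchy also appear in the text.

```python
# Hypothetical two-entry gazetteer; each candidate sense carries its chain
# of ancestors in the geographic hierarchy.
GAZETTEER = {
    "Paris": [("Paris", "France", "Europe"),
              ("Paris", "Texas", "United States")],
}

def disambiguate(name, text):
    """Prefer the sense whose hierarchy ancestors also occur in the text."""
    words = set(text.split())
    def score(chain):
        return sum(1 for ancestor in chain[1:] if ancestor in words)
    return max(GAZETTEER[name], key=score)
```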

    Evolving viable pitch contours

    At a very basic level, a piece of music can be defined as an organised arrangement of sounds occurring both sequentially (as in melody) and concurrently (as in harmony). As music evolved into a science and an established form of art, people started studying the characteristics of these sounds and drew up sets of guidelines and rules that, if followed, would produce pieces of music that are aesthetically more pleasing than others. Early examples can be seen in Pythagoras’ observations and experiments with blacksmiths’ hammers. Allegedly, some 2500 years ago, he was walking by a blacksmith’s shop when he heard the ringing tones of hammers hitting an anvil. Upon further observation, he realised that a hammer weighing half as much as another sounded twice as high in pitch (an octave – ratio 2:1). A pair of hammers whose weights had a ratio of 3:2 sounded a fifth apart. Eventually he came to the conclusion that simple ratios sounded good. In this paper, we are concerned with the generation of musical phrases constrained by the rules that governed music developed during the so-called Common Practice Period (CPP). This period refers to an era in musical history spanning from the 17th to the early 20th centuries [2] and included the Baroque and Romantic styles amongst others. Colloquially, music in the style of the CPP is sometimes better (but incorrectly) known as ‘Classical’ music.
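The two ratios from the anecdote translate directly into frequency arithmetic, shown here starting from the modern reference pitch A4 = 440 Hz (the reference pitch is a modern convention, not part of the anecdote):

```python
# The two interval ratios from the Pythagoras anecdote, applied to a pitch.
def octave_up(freq):
    return freq * 2        # octave, ratio 2:1

def fifth_up(freq):
    return freq * 3 / 2    # perfect fifth, ratio 3:2

a4 = 440.0                 # modern reference pitch A4
a5 = octave_up(a4)         # one octave up: 880.0 Hz
e5 = fifth_up(a4)          # a perfect fifth up: 660.0 Hz
```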

    How did I find that : automatically constructing queries from bookmarked web pages and categories

    We present ‘How Did I Find That?’ (HDIFT), an algorithm to find web pages related to categories of bookmarks (bookmark folders) or individual bookmarks stored in a user’s bookmark (or favorites) file. HDIFT automatically generates a query from the selected bookmarks and categories, submits the query to a third-party search engine, and presents the results to the user. HDIFT’s approach is innovative in that we select keywords to generate the query from a bookmarked web page’s parents (other web-based documents that contain a link to the bookmarked web page), rather than from the bookmarked web page itself. Our initial limited evaluation results are promising. Volunteers who participated in the evaluation considered 20% of all query results to be relevant and interesting enough to bookmark. Additionally, 56.9% of the queries generated yielded result sets (of at most 10 results) containing at least one interesting and bookmarkable web page.
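The keyword-selection idea, drawing query terms from the texts of a page's parents rather than from the page itself, can be sketched as follows; plain term frequency stands in for whatever weighting HDIFT actually uses, which the abstract does not specify.

```python
from collections import Counter

# Sketch of parent-based query construction: terms come from the texts of
# pages that link to the bookmark, not from the bookmarked page itself.
# Plain term frequency is an assumption standing in for HDIFT's weighting.

def build_query(parent_texts, n_terms=3):
    counts = Counter()
    for text in parent_texts:
        counts.update(text.lower().split())
    return " ".join(term for term, _ in counts.most_common(n_terms))
```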

    A brief comparison of real-time software design methods

    This paper briefly compares several mainstream methods/methodologies that are used for the analysis and design of real-time systems. These are i) CORE, ii) YSM, iii) MASCOT, iv) CODARTS, v) HOOD, vi) ROOM, vii) UML, viii) UML-RT. Methods i-iii use a data-driven approach, whilst methods iv-viii use an object-oriented approach. All these methods have their advantages and disadvantages, so it is difficult to decide which method is best suited to a particular real-time design situation. Some methods, like YSM, MASCOT and CODARTS, are more oriented towards designing event-driven systems and reactive behavior. Object-oriented methods like the UML include many diagrams obtained from other methods. In the first part of the paper each method is briefly presented and its main features are explained. In the second part a score-based ranking is used to try to identify which method has the best overall characteristics for real-time development. The final results are presented in tabular form and using a bar chart. In addition, it is explained how each method fits into the SDLC. Both the score of each method and how it fits into the SDLC must be considered when selecting a method. To conclude, some other issues are explained, because the selection of one method does not automatically imply that there will not be any problems.
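The score-based ranking mechanism can be illustrated as below. The criteria and marks are purely hypothetical, chosen only to show how per-criterion scores are totalled and ranked; they are not the paper's actual results.

```python
# Purely hypothetical criteria and marks, illustrating the ranking
# mechanism only; these are not the paper's results.
scores = {
    "UML-RT":  {"tool support": 4, "real-time notation": 5, "maturity": 3},
    "CODARTS": {"tool support": 2, "real-time notation": 4, "maturity": 4},
    "MASCOT":  {"tool support": 1, "real-time notation": 3, "maturity": 5},
}

totals = {method: sum(marks.values()) for method, marks in scores.items()}
ranking = sorted(totals, key=totals.get, reverse=True)
```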

    An ontology of security threats to web applications

    As the use of the internet for commercial purposes continues to grow, so does the number of security threats which attempt to disrupt online systems. A number of these threats are in fact unintended. For example, a careless employee might drop a cup of coffee onto essential equipment. However, when compared to the brick-and-mortar world, the internet offers would-be attackers a more anonymous environment in which to operate. Also, the free availability of hacking tools makes it possible even for the curious teenager to carry out dangerous attacks. Despite this ever-present threat, however, it is all too often the case that security is dealt with (if at all) after a web application has been developed. This is mainly due to our software development heritage, whereby companies prefer to focus on the functionality of new systems because that provides an immediate return on investment. As a precursor to proposing a framework for building security into web applications, this paper presents an ontology of threats to web applications. The thinking behind this is that, just as in the military world one needs as much intelligence about the enemy as possible, the same can be argued in the case of online security threats. Such an ontology would enable stakeholders in online applications to take less of a reactive stance and instead be more proactive, by being aware of what is out there.

    Formal verification of enterprise integration architectures

    This is a near-finished paper to be presented at an international research conference. Weak bisimulation is a process-calculus equivalence relation applied in the verification of communicating concurrent systems [Miln 99]. In this paper we propose the application of weak bisimulation to Enterprise Application Integration verification. Formal verification is carried out by taking the system specification and design models of an integrated system and converting them into value-passing CCS (Calculus of Communicating Systems) processes. If a weak bisimulation relation is found between the two models, then it can be concluded that the EI architecture is a valid one. The formal verification of an EI architecture would add value to an EI project framework, allowing the challenge of cumbersome and complex testing typically faced by EI projects [Khan 05] to be alleviated, thus increasing the possibility of a successful EI project delivered on time and within the stipulated budgeted costs. This paper shows the applicability of value-passing CCS (or an equivalent formal notation) to model the characteristics of EI systems, and investigates the computational complexity of available weak bisimulation algorithms, in order to analyse the applicability of this proposition in real life.
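As an illustration of the verification step, here is a naive weak-bisimulation check over a small labelled transition system (the LTS itself and the brute-force fixpoint are assumptions for the sketch): internal "tau" moves are absorbed by saturation, and two states are related when each weak move of one can be matched by the other. Real checkers use the far more efficient partition-refinement algorithms whose complexity the paper investigates.

```python
# Brute-force weak-bisimulation check on a tiny labelled transition system
# (each state maps to a list of (action, successor) pairs). Internal "tau"
# moves are absorbed; production checkers use partition refinement instead.

TAU = "tau"

def tau_closure(lts, states):
    """All states reachable from `states` via zero or more tau moves."""
    closure, frontier = set(states), set(states)
    while frontier:
        frontier = {t for s in frontier
                    for (a, t) in lts.get(s, []) if a == TAU} - closure
        closure |= frontier
    return closure

def weak_moves(lts, s, a):
    """States reachable by tau* a tau* (or just tau* when a is tau)."""
    pre = tau_closure(lts, {s})
    if a == TAU:
        return pre
    mid = {t for p in pre for (b, t) in lts.get(p, []) if b == a}
    return tau_closure(lts, mid)

def weakly_bisimilar(lts, s0, t0):
    """Greatest-fixpoint computation: start from all pairs, drop failures."""
    states = set(lts) | {t for s in lts for (_, t) in lts[s]}
    labels = {a for s in lts for (a, _) in lts[s]} | {TAU}
    rel = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = (all(any((s2, t2) in rel for t2 in weak_moves(lts, t, a))
                      for a in labels for s2 in weak_moves(lts, s, a))
                  and all(any((s2, t2) in rel for s2 in weak_moves(lts, s, a))
                          for a in labels for t2 in weak_moves(lts, t, a)))
            if not ok:
                rel.discard((s, t))
                changed = True
    return (s0, t0) in rel

# p takes an internal tau step before "a"; q does "a" directly:
# the two are weakly bisimilar, while r (which only offers "b") is not.
lts = {"p": [("tau", "p1")], "p1": [("a", "p2")], "p2": [],
       "q": [("a", "q1")], "q1": [],
       "r": [("b", "r1")], "r1": []}
equivalent = weakly_bisimilar(lts, "p", "q")
```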

    Semantically annotating the desktop : towards a personal ontology

    The advent of the World-Wide Web brought with it a proliferation of information from e-mail, forums, chat and websites that rapidly led to information overload, and to subsequent storage and maintenance problems on users’ personal computers. The desktop has become a repository of data that hosts various types of files. The recent massive increase in data has resulted in a continuous attempt to enhance our data organisation techniques, and hence in the development of personal information management software. In this paper we present an overview of data organisation techniques related to personal data management, which has been an active research area for decades. We look at how personal information managers handle different types of files, and abstract these file types into a single user interface. Despite their advanced user interfaces, we argue that traditional personal information managers tend to be very domain-specific and lack user adaptability. To address these limitations we propose a semantic desktop application that exploits the flexibility of semantic web technologies, and introduces the concept of a Personal Ontology to aid in data organisation, which can also be used by other desktop applications such as information retrieval tools and intelligent software agents.